1. Introduction
Imaging spectrometers collect the target spatial-spectral datacube, which can be used in many areas, such as biomedicine,[1,2] environmental monitoring,[3] geology,[4] and remote sensing.[5] Most conventional imaging spectrometers rely on scanning in either the spatial domain[6] or the spectral domain[7,8] to acquire the full datacube. The scanning process causes motion artifacts when observing dynamic targets such as a moving car. To overcome this limitation, snapshot imaging spectrometers[9] have been developed to acquire the whole datacube simultaneously. There are various designs of snapshot spectrometers, such as the non-scanning computed-tomography imaging spectrometer (CTIS),[10,11] the image replicating imaging spectrometer (IRIS),[12,13] the image mapping spectrometer (IMS),[14–16] the coded aperture snapshot spectral imager (CASSI),[17–20] multispectral Sagnac interferometry (MSI),[21] and the light field modulated imaging spectrometer (LFMIS).[22–25] Approaches such as CTIS, CASSI, IRIS, and MSI employ complex computational strategies to calculate the datacube and have to deal with calibration difficulty, computational complexity, and measurement artifacts.[9,26] Both IMS and LFMIS are direct measurement strategies,[26] and LFMIS has the more compact structure.
Horstmeyer et al. introduced an LFMIS system that acquires the spectrum, polarization state, and intensity of a target simultaneously by placing an array of filters at the objective aperture of a pinhole plenoptic camera.[22] Meng et al. presented a multispectral plenoptic camera based on four bandpass filters.[24] Su et al. implemented a linear variable filter (LVF) in a microlens-based plenoptic camera.[25] Yuan et al. presented a diffraction model of the LVF-based LFMIS to analyze the spectral resolution of the system.[27] For spectral data reconstruction with bandpass filters, several approaches have been reported. Horstmeyer averaged the contiguous pixels corresponding to the same spectral filter.[22] Cavanaugh directly abstracted the pixel with the maximum response to each spectral filter,[23] a method also used by Tkaczyk for reconstructing the spectral data of the IMS.[15] Meng considered the multiplexing of the spectrum, restored it by a demultiplexing approach,[24] and compared all three methods.
The LFMIS systems described in Refs. [22]–[24] were all based on bandpass filters, whose spectral characteristics determine the center wavelengths and bandwidths of the system spectral channels. For the LVF-based system, however, the center wavelengths and bandwidths of the spectral channels are undetermined. Yuan et al. used a collimated monochromatic light setup,[27] similar to the one used by Tkaczyk for calibrating the IMS,[15] to calibrate the spectral parameters. That process can calibrate only one microlens at a time, so calibrating the entire system is time consuming. Furthermore, the quality of the reconstructed spectral data needs to be further improved. In the present research, we present a spectral multiplexing model of the LFMIS system, basing the analysis on the aliasing of spectral channels. A calibration setup is proposed for fast calibration of the spectral-channel characteristics of the LVF-based LFMIS system, together with a scheme for calculating the multiplexing matrices from the calibrated data. A spectral reconstruction algorithm that accounts for the errors of the calibration data is introduced to improve the quality of the restored spectrum. In Section 3, simulations are performed to evaluate the performances of the algorithms. In Section 4, a prototype LFMIS is calibrated by using the proposed calibration scheme. The spectra of different color blocks restored from the experimental data confirm the effectiveness of the proposed algorithm.
2. Theory
2.1. Spectral multiplexing model of the system
The fore-optics, which can be any general paraxial imaging system, is simplified into a thin main lens with a spectral filter array (SFA) placed at the pupil aperture, denoted by the coordinates (x, y). The target is imaged onto the microlens array (MLA) plane. The distance between the target and the main lens is z1, and the distance between the MLA plane and the main lens plane is z2. The sensor is placed in the back focal plane of the MLA.
As shown in Fig. 1, the light from the target travels through a filter in the SFA at the aperture of the main lens, and is imaged by a microlens onto the sensor. In an ideal case, the light passing through a spectral filter is incident on a single pixel, as shown in Fig. 2(a). However, as shown in Fig. 2(b), the image of a single filter spreads onto adjacent pixels due to diffraction. Furthermore, a filter may be imaged onto several pixels due to misalignment between the MLA and the sensor. As a result, a pixel receives light from multiple spectral channels.
The radiance passing through the j-th filter channel with central wavelength λj is denoted as Ij. The i-th pixel covered by an arbitrary microlens receives the portion ai,j of the entire radiance Ij. Therefore, the intensity of a pixel is given as

$$ d_i = \sum_{j=1}^{N} a_{i,j} I_j + n_i. \qquad (1) $$

Then the spectral multiplexing model of an arbitrary microlens is given as

$$ \begin{pmatrix} d_1 \\ d_2 \\ \vdots \\ d_H \end{pmatrix} = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,N} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,N} \\ \vdots & \vdots & & \vdots \\ a_{H,1} & a_{H,2} & \cdots & a_{H,N} \end{pmatrix} \begin{pmatrix} I_1 \\ I_2 \\ \vdots \\ I_N \end{pmatrix} + \begin{pmatrix} n_1 \\ n_2 \\ \vdots \\ n_H \end{pmatrix}, \qquad (2) $$

where N is the number of spectral channels, H is the number of pixels covered by a microlens, and ni is the noise of the i-th pixel. The above equation is simplified into

$$ \boldsymbol{D} = \boldsymbol{A}\boldsymbol{I} + \boldsymbol{n}_1, \qquad (3) $$

where A is the spectral multiplexing matrix of an arbitrary microlens, D = [d1, …, dH]T is the raw data recorded by the sensor pixels, and I = [I1, …, Ij, …, IN]T is the spectral radiance of the target. Also, n1 = [n1, …, ni, …, nH]T is the noise matrix of the observed data due to the system noise, namely the readout noise and quantization noise of the sensor.
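For concreteness, the following NumPy sketch assembles the per-microlens model of Eq. (3); the values of H and N, the random matrix A, and the noise level are hypothetical placeholders, since in practice A comes from the calibration described in Subsection 2.2.

```python
import numpy as np

# Hypothetical sizes: H pixels behind one microlens, N spectral channels.
H, N = 10, 8
rng = np.random.default_rng(0)

# A[i, j] is the portion of the channel-j radiance I_j received by pixel i;
# each column is normalized to sum to 1 (illustrative placeholder, not calibrated data).
A = rng.random((H, N))
A /= A.sum(axis=0, keepdims=True)

I = rng.random(N)                    # target spectral radiance I_1, ..., I_N
n1 = 1e-3 * rng.standard_normal(H)   # readout and quantization noise

D = A @ I + n1                       # Eq. (3): raw data of one microlens
print(D.shape)                       # -> (10,)
```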
2.2. Calibration of spectral multiplexing matrix
In order to reconstruct the spectrum of the target, the spectral multiplexing matrix A is required. In practice, the coefficients ai,j of matrix A are determined by a calibration process. For the systems based on bandpass filters,[22–24] the matrix coefficient ai,j can be calibrated by using a light source with the same wavelength and bandwidth as those of the bandpass filters, and is given by

$$ a_{i,j} = \frac{d_i(\lambda_j)}{\sum_{i=1}^{H} d_i(\lambda_j)}, \qquad (4) $$

where di(λj) is the response of the i-th pixel at the center wavelength λj of the j-th channel, and H is the number of pixels covered by an arbitrary microlens.
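A minimal sketch of this calibration, assuming the column-wise normalization of Eq. (4) as reconstructed above (each channel's radiance is distributed over the H pixels), could look as follows; the function name and the responses array are illustrative.

```python
import numpy as np

def bandpass_coefficients(responses):
    """responses[i, j]: response of pixel i under illumination matched to the
    j-th bandpass filter. Returns a[i, j] = responses[i, j] / sum_i responses[i, j],
    i.e. the normalization of Eq. (4)."""
    responses = np.asarray(responses, dtype=float)
    return responses / responses.sum(axis=0, keepdims=True)
```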
For the system[25] based on the LVF, the spectral channels should be calibrated before the matrix is calculated. Each microlens images the LVF onto the sensor, covering several pixels, as shown in Fig. 3. The number of spectral channels equals the number of pixels occupied by the LVF image along the wavelength direction.
In order to obtain the pixel spectral responses that determine the characteristics of the spectral channels, we introduce a calibration setup, as shown in Fig. 4. The monochromator adjusts the wavelength of the light incident into the integrating sphere. The integrating sphere provides a uniform monochromatic surface light source that fills the entire field of view of the LFMIS system. All the microlenses are therefore illuminated, and the sensor records all the sub-images at an arbitrary wavelength simultaneously.
The calibration procedure is summarized in the following steps. 1) Adjust the monochromator to generate output light with central wavelength λ and bandwidth δλ. 2) Take the LFMIS spectral image of the uniform monochromatic scene. 3) Repeat Step 1) and Step 2) to scan the spectral range of the system, increasing the central wavelength by δλ at each step. 4) Extract the intensities of a pixel at the different wavelengths to obtain the spectral response distribution of this pixel.
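The scan yields, for every sensor pixel, a response-versus-wavelength curve. The sketch below illustrates this bookkeeping under assumed values: the 3-nm step, the wavelength range, and the (small) sensor size are placeholders, and frames stands for the images recorded in Step 2).

```python
import numpy as np

# Hypothetical scan: 3-nm steps, one full sensor frame per wavelength (Step 2)).
delta_lambda = 3.0
wavelengths = np.arange(450.0, 600.0 + delta_lambda, delta_lambda)  # (W,)
frames = np.zeros((wavelengths.size, 100, 100))                     # placeholder frames

def pixel_response_curve(frames, row, col):
    """Step 4): the spectral response distribution of one pixel,
    i.e. its recorded intensity at every scanned wavelength."""
    return frames[:, row, col]                                       # (W,)
```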
Figure 5 shows the measured spectral responses of two adjacent pixels of a prototype LFMIS as well as the fitted Gaussian curves. Here, the fitted central wavelength and the full-width at half-maximum (FWHM) of the pixel response act as the central wavelength and bandwidth of the corresponding spectral channel.
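One way to obtain these two parameters is a least-squares Gaussian fit of the measured response curve, for instance with scipy.optimize.curve_fit; the FWHM then follows from the fitted standard deviation as 2·sqrt(2 ln 2)·σ. This is a sketch under the assumption that a single Gaussian describes the channel response, as Fig. 5 suggests.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(lam, amp, center, sigma):
    return amp * np.exp(-0.5 * ((lam - center) / sigma) ** 2)

def fit_channel(wavelengths, response):
    """Fit one pixel's response curve; return (center wavelength, FWHM)."""
    p0 = (response.max(), wavelengths[np.argmax(response)], 5.0)  # rough initial guess
    (amp, center, sigma), _ = curve_fit(gaussian, wavelengths, response, p0=p0)
    fwhm = 2.0 * np.sqrt(2.0 * np.log(2.0)) * abs(sigma)
    return center, fwhm
```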
Once the spectral channel characteristics are determined, we can calculate the matrix coefficients. The pixel response over one spectral channel is the integral of the responses measured under narrow-band monochromatic illumination at the wavelengths within the channel bandwidth, i.e.,

$$ d_i(\Delta\lambda_j) = \sum_{k=-M}^{M} d_i(\lambda_j + k\,\delta\lambda), \qquad (5) $$

where di(λj + k·δλ) is the response of the i-th pixel at wavelength λj + k·δλ, and M = ⌈Δλj/(2δλ)⌉. Here, we notice that the irradiance of the monochromatic light varies with wavelength. Therefore, as shown in Fig. 4, a spectroradiometer (ASD Inc.) is used to monitor the irradiance L(λ) of the monochromatic light during the calibration process. The spectral matrix coefficient ai,j is then calculated from

$$ a_{i,j} = \frac{\sum_{k=-M}^{M} d_i(\lambda_j + k\,\delta\lambda)/L(\lambda_j + k\,\delta\lambda)}{\sum_{i=1}^{H} \sum_{k=-M}^{M} d_i(\lambda_j + k\,\delta\lambda)/L(\lambda_j + k\,\delta\lambda)}, \qquad (6) $$

where L(λj + k·δλ) is the irradiance of the monochromatic light at the corresponding wavelength. The calculation is repeated for each microlens to determine the corresponding spectral multiplexing matrix.
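The sketch below integrates each pixel's response over its channel bandwidth and applies the irradiance correction, assuming the normalized form of Eq. (6) reconstructed above; the arrays responses, irradiance, centers, and bandwidths are hypothetical inputs taken from the calibration scan and the fits of Table 2.

```python
import numpy as np

def multiplexing_matrix(wavelengths, responses, irradiance, centers, bandwidths):
    """wavelengths: (W,) scanned wavelengths, uniformly spaced by delta_lambda.
    responses: (H, W) calibrated responses of the H pixels of one microlens.
    irradiance: (W,) spectroradiometer readings L(lambda).
    centers, bandwidths: (N,) fitted channel parameters (cf. Table 2).
    Returns A of shape (H, N) with each column normalized to sum to 1 (Eq. (6))."""
    delta = wavelengths[1] - wavelengths[0]
    H, N = responses.shape[0], len(centers)
    A = np.zeros((H, N))
    for j, (lam_c, dlam) in enumerate(zip(centers, bandwidths)):
        M = int(np.ceil(dlam / (2.0 * delta)))
        k = int(np.argmin(np.abs(wavelengths - lam_c)))       # sample nearest the channel center
        window = slice(max(k - M, 0), min(k + M + 1, len(wavelengths)))
        weighted = responses[:, window] / irradiance[window]  # divide out the source irradiance
        A[:, j] = weighted.sum(axis=1)
    return A / A.sum(axis=0, keepdims=True)
```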
2.3. Spectral reconstruction algorithm
Once the spectral multiplexing matrix A is obtained, we can reconstruct the target spectrum by using demultiplexing algorithms. Meng and Berkner assumed that the matrix A is accurate and that all the errors are caused by the noise n1 of D. They proposed a demultiplexing algorithm[24] to solve Eq. (3) by taking the pseudoinverse of the calibrated response matrix A, i.e., $\boldsymbol{\Phi} = (\boldsymbol{A}^{\rm H}\boldsymbol{A})^{-1}\boldsymbol{A}^{\rm H}$. The spectrum of the target obtained by this algorithm is given by

$$ \hat{\boldsymbol{I}} = \boldsymbol{\Phi}\boldsymbol{D} = (\boldsymbol{A}^{\rm H}\boldsymbol{A})^{-1}\boldsymbol{A}^{\rm H}\boldsymbol{D}. \qquad (7) $$

We refer to Meng's algorithm as the direct reverse (DR) algorithm in this paper. However, in practice, the calibrated spectral multiplexing matrix A also contains a noise matrix n2 caused by the calibration errors and the system noise. The exact data matrices A0 = A − n2 and D0 = D − n1 satisfy the exact relation D0 = A0I, but they are unobservable. Therefore, equation (3) should be treated as a total least squares (TLS) problem.[28] The TLS method solves Eq. (3) by solving the following problem:

$$ \min_{\hat{\boldsymbol{A}},\,\hat{\boldsymbol{D}}} \big\| (\boldsymbol{A},\boldsymbol{D}) - (\hat{\boldsymbol{A}},\hat{\boldsymbol{D}}) \big\|_{\rm F} \quad {\rm subject\ to} \quad \hat{\boldsymbol{D}} = \hat{\boldsymbol{A}}\hat{\boldsymbol{I}}, \qquad (8) $$

where $\hat{\boldsymbol{A}}$ and $\hat{\boldsymbol{D}}$ approximate the unobservable exact data matrices A0 and D0, respectively, and the noise matrices n1 and n2 correspond to the observed matrix D and the calibrated matrix A, respectively. We use the method proposed by Golub and Van Loan to solve this problem, which is based on the singular value decomposition (SVD).[28,29] The SVDs of A and G = (A, D) are given as

$$ \boldsymbol{A} = \boldsymbol{U}'\boldsymbol{\Sigma}'\boldsymbol{V}'^{\rm H}, \qquad \boldsymbol{G} = (\boldsymbol{A},\boldsymbol{D}) = \boldsymbol{U}\boldsymbol{\Sigma}\boldsymbol{V}^{\rm H}, \qquad (9) $$

where Σ′ = diag(σ′1, …, σ′K) and Σ = diag(σ1, …, σK+1). For any integer p with 1 ⩽ p ⩽ K, Σ is divided into Σ1 = diag(σ1, ⋯, σp) and Σ2 = diag(σp+1, ⋯, σK+1), and the matrix V is partitioned accordingly as

$$ \boldsymbol{V} = \begin{pmatrix} \boldsymbol{V}_{11}(p) & \boldsymbol{V}_{12}(p) \\ \boldsymbol{V}_{21}(p) & \boldsymbol{V}_{22}(p) \end{pmatrix}. \qquad (10) $$

If, for some integer p ⩽ K, σp+1 < σp and rank(V22(p)) = 1, then the TLS solution I0 of Eq. (3) is the solution of the consistent linear system in Eq. (8), with $(\hat{\boldsymbol{A}},\hat{\boldsymbol{D}}) = \boldsymbol{U}_1\boldsymbol{\Sigma}_1\boldsymbol{V}_1^{\rm H}$, where U1 consists of the first p columns of U = (U1, U2) and V1 of the first p columns of V = (V1, V2). Solving Eq. (8) by the linear least squares approach, we obtain the spectrum solution

$$ \hat{\boldsymbol{I}} = \hat{\boldsymbol{A}}^{+}\hat{\boldsymbol{D}}, \qquad (11) $$

where $\hat{\boldsymbol{A}}^{+}$ is the Moore–Penrose generalized inverse of $\hat{\boldsymbol{A}}$.
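A compact sketch of both reconstructions for one microlens is given below: the DR estimate is the pseudoinverse solution of Eq. (3), and the TLS estimate truncates the SVD of the augmented matrix (A, D) and then solves the resulting consistent system, in the spirit of the Golub–Van Loan construction outlined above. Here p = N (a full-rank A with a single right-hand side) is assumed for simplicity.

```python
import numpy as np

def reconstruct_dr(A, D):
    """Direct reverse (DR): pseudoinverse / least-squares solution of D = A I."""
    return np.linalg.pinv(A) @ D

def reconstruct_tls(A, D):
    """Total least squares: perturb both A and D, then solve the consistent system."""
    H, N = A.shape
    G = np.column_stack((A, D))              # augmented matrix (A, D), shape (H, N+1)
    U, s, Vh = np.linalg.svd(G, full_matrices=False)
    p = N                                    # keep the first p = N singular values
    G_hat = (U[:, :p] * s[:p]) @ Vh[:p, :]   # rank-p approximation (A_hat, D_hat)
    A_hat, D_hat = G_hat[:, :N], G_hat[:, N]
    return np.linalg.pinv(A_hat) @ D_hat     # I = A_hat^+ D_hat, cf. Eq. (11)

# Equivalent closed form for H >= N + 1 (classical single-right-hand-side TLS):
# with V = Vh.conj().T, I_tls = -V[:N, N] / V[N, N].
```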
3. Simulation results
In this section, we perform simulation experiments to evaluate the performance of the spectrum reconstruction algorithms. An ideal spectral response matrix A0 with 32 channels is synthesized. An AVIRIS[6] spectral datacube of 32 channels from 458 nm to 788 nm over an area of 101 × 101 pixels, as shown in Fig. 6(a), is employed as the input spectra for the simulation. Figure 6(b) shows the synthesized spectral sub-images of target T1 and target T2 observed by an LFMIS.
All the spectra of the 101 × 101 targets are used in D0 = A0I to synthesize the exact observed data. White Gaussian noise at signal-to-noise ratio (SNR) levels of 40 dB, 50 dB, and 60 dB is added to the ideal spectral matrix A0 as well as to the exact observed data D0. The spectra are reconstructed by the TLS method presented in Subsection 2.3 and by the DR algorithm used by Meng and Berkner.[24] To evaluate the performances of the algorithms, we use the spectral angle mapper (SAM)[30] criterion, as given below.
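White Gaussian noise at a prescribed SNR can be generated by scaling the noise power relative to the signal power; a minimal helper, assuming the SNR is the power ratio expressed in dB, is sketched below.

```python
import numpy as np

def add_awgn(x, snr_db, rng=None):
    """Add white Gaussian noise so that 10*log10(P_signal / P_noise) = snr_db."""
    rng = np.random.default_rng() if rng is None else rng
    p_signal = np.mean(np.abs(x) ** 2)
    p_noise = p_signal / (10.0 ** (snr_db / 10.0))
    return x + rng.normal(0.0, np.sqrt(p_noise), size=np.shape(x))
```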
$$ {\rm SAM} = \arccos\!\left(\frac{\boldsymbol{M}_{\rm R}\cdot\hat{\boldsymbol{M}}}{\|\boldsymbol{M}_{\rm R}\|\,\|\hat{\boldsymbol{M}}\|}\right), \qquad (12) $$

where MR and $\hat{\boldsymbol{M}}$ are the reference spectrum and the calculated spectrum, respectively. The smaller the value of SAM, the better the restored spectrum is. Figure 7 shows the reconstructed spectra of two typical targets at different noise levels. It is evident that the TLS algorithm performs better than the DR algorithm[24] at the higher noise levels.
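A direct implementation of the SAM criterion, following the angle form of Eq. (12) as reconstructed above, is:

```python
import numpy as np

def sam_degrees(reference, estimate):
    """Spectral angle mapper between two spectra, in degrees."""
    cos_angle = np.dot(reference, estimate) / (
        np.linalg.norm(reference) * np.linalg.norm(estimate))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))
```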
The values of SAM differ among the different input target spectra, so the average value of SAM is used, given by

$$ \overline{\rm SAM} = \frac{1}{N_{\rm T}}\sum_{t=1}^{N_{\rm T}} {\rm SAM}_t, \qquad (13) $$

where NT is the number of target spectra. The minimum, maximum, and average values of SAM of the reconstructed spectra are summarized in Table 1. The results show that both the TLS algorithm and the DR algorithm can reconstruct spectra accurately at a low noise level. The performances of both algorithms degrade considerably as the noise level increases; however, the TLS algorithm reconstructs the target spectra with overall better accuracy. Figure 8 shows the distributions of the SAM values of all spectra reconstructed by the two algorithms at the 40-dB noise level. The relatively small range of the SAM values of the TLS algorithm shows that it is more robust than the DR algorithm.
Table 1. SAM values of spectra reconstructed by the TLS and DR algorithms at different noise levels.

SNR level | Method | Minimum | Maximum | Average
40 dB | DR | 6.46° | 28.47° | 13.18°
40 dB | TLS | 4.73° | 10.16° | 6.34°
50 dB | DR | 2.46° | 10.35° | 3.73°
50 dB | TLS | 1.97° | 5.94° | 3.03°
60 dB | DR | 0.58° | 3.08° | 1.32°
60 dB | TLS | 0.57° | 3.07° | 1.29°
4. Experimental results
In this section, we present the experimental results of a prototype LFMIS. The fore optics is a Nikon lens with a focal length of F = 50 mm and an f-number of f/1.8. As shown in Fig. 9, an LVF (JDSU Inc.) is mounted at the aperture stop of the lens. The designed linear dispersion coefficient is 30 nm/mm, and the spectral percentage coefficient is 2%. The detector is a Bobcat IGV-4020 camera (Imperx Inc.) with 9-μm pixels. The microlens array contains squarely arranged circular microlenses, each with a 90-μm diameter and a 0.54-mm focal length, so each microlens covers 10 × 10 pixels.
4.1. System calibration results
Figure 10 shows the sub-image behind an arbitrary microlens when the system is illuminated by a tungsten light source. The pixels from channel 1 to channel 8 correspond to the eight spectral channels. A calibration setup is built according to the scheme shown in Fig. 4. The bandwidth of the monochromatic light is approximately 3 nm. The sensor responses of channel 4 and channel 5, measured by scanning the wavelength of the monochromator, are shown in Fig. 5 in Subsection 2.2. Table 2 lists the center wavelengths and bandwidths of the channels fitted from the calibration data of the pixels shown in Fig. 10.
Table 2. Fitted center wavelengths and bandwidths from the calibration data.

Channel j | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8
λj/nm | 464.5 | 482.4 | 501.3 | 519.3 | 538.0 | 556.3 | 575.4 | 593.1
Δλj/nm | 17.0 | 18.4 | 19.4 | 20.1 | 20.7 | 22.0 | 22.6 | 19.0
The spectral coefficients ai,j of the spectral multiplexing matrix A of an arbitrary microlens are calculated based on Eq. (6) and plotted in Fig. 11. The coefficient plot includes the ten pixels lying in the same column as each of the channel pixels shown in Fig. 10.
4.2. Spectral reconstruction results
The target, as shown in Fig. 12(a), is placed at a distance of z1 = 1 m from the LFMIS. The reference spectra of the color blocks are measured by the same spectroradiometer used in the calibration procedure. The raw light field spectral image is shown in Fig. 12(b).
The spectra of the color blocks in Fig. 12(a) are reconstructed by three different methods. The first is the TLS algorithm of Subsection 2.3, and the second is the DR algorithm proposed by Meng.[24] The third is the directly abstracting (DA) method,[23] in which a column of pixels is extracted, each pixel being the one with the maximum response to the corresponding spectral channel.
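A sketch of the DA extraction, under the assumption that the calibrated matrix A is used to identify which pixel responds most strongly to each channel, is:

```python
import numpy as np

def reconstruct_da(A, D):
    """Directly abstracting (DA): for each spectral channel take the raw value
    of the pixel with the maximum calibrated response to that channel."""
    best_pixel = np.argmax(A, axis=0)   # one pixel index per channel
    return D[best_pixel]
```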
Figure 13 shows the reference (Ref.) spectra and the spectra restored by the three algorithms. The corresponding SAM values are summarized in Table 3. The spectral curves obtained by the DA method reflect the deviation between the raw data and the reference results: a larger SAMDA means that the errors of the raw data are larger for the corresponding color. Both demultiplexing algorithms, DR and TLS, restore spectra with lower SAM values than the DA method. The results also coincide with the simulation conclusion that the TLS algorithm restores more accurate spectra than the DR algorithm.
Table 3. SAM values corresponding to the restored spectra of the color blocks.

Algorithm | Blue | Green | Yellow | Orange | Red
SAMTLS | 6.92° | 1.69° | 1.76° | 4.83° | 8.52°
SAMDR | 8.43° | 1.95° | 4.62° | 7.07° | 13.98°
SAMDA | 10.42° | 6.17° | 4.13° | 9.77° | 19.42°
Excluding the junction areas, each color block covers about 8 × 8 microlenses. The spectrum behind each microlens is restored by the three algorithms. The average and root-mean-square (RMS) deviation of the SAM values within the same color block are given respectively by

$$ \overline{\rm SAM} = \frac{1}{M}\sum_{m=1}^{M} {\rm SAM}_m, \qquad (14) $$

$$ \sigma_{\rm SAM} = \sqrt{\frac{1}{M}\sum_{m=1}^{M}\left({\rm SAM}_m - \overline{\rm SAM}\right)^2}, \qquad (15) $$

where M is the number of microlenses in a color block. The minimum, average, and RMS values of SAM for the different blocks are summarized in Table 4. The SAMmin and average SAM results show that the TLS algorithm obtains better results than the other two methods. Furthermore, the smaller σSAM shows that the TLS algorithm performs with better consistency when the noise differs among the microlens sub-images.
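Given the per-microlens SAM values of one color block, the statistics reported in Table 4 follow directly; a minimal sketch assuming Eqs. (14) and (15) as reconstructed above is:

```python
import numpy as np

def block_statistics(sam_values):
    """sam_values: SAM of every microlens in one color block (about 8 x 8 values).
    Returns (minimum, average, RMS deviation from the average)."""
    sam_values = np.asarray(sam_values, dtype=float)
    avg = sam_values.mean()
    rms = np.sqrt(np.mean((sam_values - avg) ** 2))
    return sam_values.min(), avg, rms
```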
Table 4. Minimum, average, and RMS values of SAM for different color blocks.

Algorithm | Metric | Blue | Green | Yellow | Orange | Red
TLS | SAMmin | 6.92° | 1.69° | 1.76° | 3.62° | 8.52°
TLS | average SAM | 9.95° | 3.30° | 3.86° | 5.44° | 11.63°
TLS | σSAM | 1.08° | 0.68° | 0.93° | 0.92° | 2.09°
DR | SAMmin | 7.38° | 1.80° | 2.35° | 5.22° | 12.03°
DR | average SAM | 10.83° | 4.65° | 5.44° | 13.6° | 23.96°
DR | σSAM | 2.35° | 3.55° | 1.60° | 8.84° | 14.27°
DA | SAMmin | 10.42° | 4.96° | 3.53° | 6.60° | 13.98°
DA | average SAM | 12.94° | 6.39° | 4.95° | 10.72° | 19.86°
DA | σSAM | 1.11° | 0.64° | 0.87° | 2.19° | 2.75°
5. Conclusions
In this paper, we introduce a linear spectral multiplexing model of the non-coherent LFMIS. We propose a calibration setup that calibrates the center wavelengths and bandwidths of the spectral channels of all the microlens sub-images simultaneously. The spectral multiplexing matrices are calculated from the channel calibration data. Since the calibrated matrices contain errors due to calibration errors and sensor noise, we introduce the TLS algorithm for the spectral reconstruction. Simulation results confirm that the TLS algorithm restores spectra with better accuracy than the DR algorithm, especially when the noise level is high. A prototype LFMIS based on an LVF is calibrated by using the proposed scheme to determine the spectral multiplexing matrices. The spectra of different color blocks are reconstructed from the acquired image data by the different algorithms, and the results confirm that the TLS approach achieves improved performance.
In summary, the data reconstruction method proposed in this paper comprises a calibration scheme and a demultiplexing algorithm. The calibration scheme obtains the spectral characteristics and multiplexing matrices of all the microlens sub-images more efficiently, and the presented demultiplexing algorithm improves the quality of the reconstructed target spectra.